
    Algorithms for the enhancement of dynamic range and colour constancy of digital images & video

    One of the main objectives in digital imaging is to mimic the capabilities of the human eye, and perhaps go beyond them in certain aspects. However, the human visual system is so versatile, complex, and only partially understood that no current imaging technology has been able to accurately reproduce its capabilities. These extraordinary capabilities have become a crucial benchmark for digital imaging, as digital photography, video recording, and computer vision applications continue to demand more realistic and accurate image reproduction and analysis. For decades, researchers have tried to solve the colour constancy problem and to extend the dynamic range of digital imaging devices by proposing a number of algorithms and instrumentation approaches. Nevertheless, no unique solution has been identified; this is partly due to the wide range of computer vision applications that require colour constancy and high dynamic range imaging, and partly due to the complexity of the mechanisms by which the human visual system achieves its colour constancy and dynamic range capabilities. The aim of the research presented in this thesis is to enhance overall image quality within the image signal processor of digital cameras by achieving colour constancy and extending dynamic range capabilities. This is achieved by developing a set of advanced image-processing algorithms that are robust to a number of practical challenges and feasible to implement within an image signal processor used in consumer electronics imaging devices. The experiments conducted in this research show that the proposed algorithms outperform state-of-the-art methods in the fields of dynamic range and colour constancy. Moreover, this unique set of image-processing algorithms shows that, when used within an image signal processor, it enables digital camera devices to mimic the human visual system's dynamic range and colour constancy capabilities: the ultimate goal of any state-of-the-art technique or commercial imaging device.

    Subjectively optimised multi-exposure and multi-focus image fusion with compensation for camera shake

    Multi-exposure image fusion algorithms are used to enhance the perceptual quality of an image captured by sensors of limited dynamic range. This is achieved by rendering a single scene from multiple images captured at different exposure times. Similarly, multi-focus image fusion is used when the limited depth of focus at a selected focus setting of a camera leaves parts of an image out of focus. The solution adopted is to fuse together a number of multi-focus images to create an image that is in focus throughout. In this paper we propose a single algorithm that can perform both multi-focus and multi-exposure image fusion. The algorithm takes a novel approach in which a set of unregistered multi-exposure/multi-focus images is first registered before being fused. Images are registered by identifying matching key points in the constituent images using the Scale-Invariant Feature Transform (SIFT). The RANdom SAmple Consensus (RANSAC) algorithm then identifies inliers among the SIFT key points, removing outliers that would cause errors in the registration process. Finally, the Coherent Point Drift algorithm registers the images, preparing them for the subsequent fusion stage. For the fusion of images, a novel approach based on an improved version of the Wavelet-Based Contourlet Transform (WBCT) is used. The experimental results show that the proposed algorithm is capable of producing HDR or multi-focus images by registering and fusing a set of multi-exposure or multi-focus images taken in the presence of camera shake.
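The RANSAC inlier-selection stage described above can be illustrated with a minimal sketch. The example below estimates a simple 2-D translation between synthetic matched key points while rejecting gross outlier matches; the paper itself works on SIFT matches and uses Coherent Point Drift for the full registration, so the translation model and the synthetic data here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ransac_translation(src, dst, n_iters=200, thresh=2.0, seed=0):
    """RANSAC sketch: estimate a 2-D translation between matched key
    points while rejecting outlier matches. `src` and `dst` are (N, 2)
    arrays of matched coordinates. A translation needs only one match
    as its minimal sample, which keeps the sketch short."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iters):
        i = rng.integers(len(src))            # minimal random sample
        t = dst[i] - src[i]                   # candidate translation
        residuals = np.linalg.norm(src + t - dst, axis=1)
        inliers = residuals < thresh          # consensus set
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit the translation on the largest consensus set
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers

# Synthetic matches: true shift (5, -3) with noise, plus 20% gross outliers
rng = np.random.default_rng(1)
src = rng.uniform(0, 100, (50, 2))
dst = src + np.array([5.0, -3.0]) + rng.normal(0, 0.3, (50, 2))
dst[:10] += rng.uniform(30, 60, (10, 2))      # corrupted matches
t, inliers = ransac_translation(src, dst)
```

The corrupted matches are excluded from the consensus set, so the refitted translation stays close to the true shift despite 20% contamination; the same principle lets the paper discard mismatched SIFT key points before registration.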